Training data optimization for language model adaptation
Authors
Abstract
Language model (LM) adaptation is a necessary step when an LM is applied to speech recognition. The task of LM adaptation is to use out-of-domain data to improve the in-domain model's performance, since the available in-domain (task-specific) data set is usually not large enough for LM training. LM adaptation faces two problems: the poor quality of the out-of-domain training data, and the mismatch between the n-gram distribution of the out-of-domain data set and that of the in-domain data set. This paper presents two methods, filtering and distribution adaptation, to address these problems respectively. First, a bootstrapping method is presented to filter a suitable portion of two large, variable-quality out-of-domain data sets for our task. Then a new algorithm is proposed to adjust the n-gram distribution of the two data sets toward that of a task-specific but small data set, while guarding against over-fitting during adaptation. All resulting models are evaluated on the realistic application of email dictation. Experiments show that each method improves performance on its own, and the combined method achieves a perplexity reduction of 24% to 80%.
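The filtering idea described above can be illustrated with a minimal sketch: train a small LM on the in-domain data, score each out-of-domain sentence by its perplexity under that model, and keep only the sentences that score below a threshold. The sketch below uses an add-one-smoothed unigram model for simplicity; the paper's actual bootstrapping procedure and threshold choice are not specified here, so the function names, the unigram model, and the threshold value are all illustrative assumptions.

```python
import math
from collections import Counter

def train_unigram(sentences, smoothing=1.0):
    """Train an add-one-smoothed unigram LM from tokenized sentences.
    Returns a function mapping a token to its log-probability."""
    counts = Counter(tok for sent in sentences for tok in sent)
    total = sum(counts.values())
    vocab = len(counts) + 1  # reserve one slot for unseen tokens
    def logprob(tok):
        return math.log((counts[tok] + smoothing) / (total + smoothing * vocab))
    return logprob

def perplexity(logprob, sentence):
    """Per-token perplexity of a sentence under the given LM."""
    lp = sum(logprob(tok) for tok in sentence)
    return math.exp(-lp / max(len(sentence), 1))

def filter_out_of_domain(in_domain, out_of_domain, threshold):
    """Keep out-of-domain sentences whose perplexity under the
    in-domain model falls below the threshold (hypothetical criterion)."""
    lm = train_unigram(in_domain)
    return [s for s in out_of_domain if perplexity(lm, s) < threshold]
```

A sentence close to the in-domain distribution (e.g. email-dictation phrases) gets low perplexity and survives the filter, while off-topic text is discarded; in a bootstrapping loop one would retrain the LM on the enlarged data and filter again.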
Similar resources
Speaker Adaptation in Continuous Speech Recognition Using MLLR-Based MAP Estimation
A variety of methods are used for speaker adaptation in speech recognition. In some techniques, such as MAP estimation, only the models with available training data are updated. Hence, large amounts of training data are required in order to have significant recognition improvements. In some others, such as MLLR, where several general transformations are applied to model clusters, the results ar...
Unsupervised adaptation for acoustic language identification
Our system for automatic language identification (LID) of spoken utterances is performed with language dependent parallel phoneme recognition (PPR) using Hidden Markov Model (HMM) phoneme recognizers and optional phoneme language models (LMs). Such a LID system for continuous speech requires many hours of orthographically transcribed data for training of language dependent HMMs and LMs as well ...
Incremental Adaptation Strategies for Neural Network Language Models
It is today acknowledged that neural network language models outperform backoff language models in applications like speech recognition or statistical machine translation. However, training these models on large amounts of data can take several days. We present efficient techniques to adapt a neural network language model to new data. Instead of training a completely new model or relying on mix...
OPTIMAL DESIGN OF ARCH DAMS BY COMBINING PARTICLE SWARM OPTIMIZATION AND GROUP METHOD OF DATA HANDLING
Optimization techniques can be efficiently utilized to achieve an optimal shape for arch dams. This optimal design can consider the conditions of the economy and safety simultaneously. The main aim is to present an applicable and practical model and suggest an algorithm for optimization of concrete arch dams to enhance their seismic performance. To achieve this purpose, a preliminary optimizati...